[finetune-llm] Training configs for 70b full parameter finetuning #85

Merged: 6 commits into main on Feb 23, 2024

Conversation

amogkam (Contributor) commented on Feb 23, 2024:

Add YAMLs for more Llama 70b finetuning configurations

Signed-off-by: Amog Kamsetty <amogkamsetty@gmail.com>
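For reference, a minimal sketch of the kind of YAML these configs contain. The batch size, learning rate, checkpoint, and dataset-size fields below are taken from the diff in this PR; the model id is an assumption for illustration only, not necessarily what the new 70b configs use.

model_id: meta-llama/Llama-2-70b-hf    # assumption: the 70b base model being full-parameter finetuned
train_batch_size_per_device: 8
eval_batch_size_per_device: 8
learning_rate: 5e-6
num_checkpoints_to_keep: 1
dataset_size_scaling_factor: 10000     # added in this PR; see the discussion below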
@@ -8,6 +8,7 @@ train_batch_size_per_device: 8
 eval_batch_size_per_device: 8
 learning_rate: 5e-6
 num_checkpoints_to_keep: 1
+dataset_size_scaling_factor: 10000
Contributor

what does this do?

amogkam (Contributor, PR author)

This removes llmforge's restriction on dataset size. It is a workaround for now; in the future we should drop the default dataset-size restriction in llmforge itself and only enable it for public endpoints. I will file an issue to track this.
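In other words, a hedged reading of the comment above: the exact semantics of the field are internal to llmforge, but the override added to each config would look like this, with a value large enough that the default limit never applies.

dataset_size_scaling_factor: 10000    # set high so llmforge's default dataset-size cap does not kick in (workaround per the comment above)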

amogkam merged commit 9491023 into main on Feb 23, 2024 (1 check failed).